In this work, we propose and evaluate a new reinforcement learning method, COMPact Experience Replay (COMPER), which uses temporal-difference learning with predicted target values based on recurrence over sets of similar transitions, together with a new approach to experience replay based on two transition memories. Our goal is to reduce the number of experiences required to train an agent with respect to the total reward accumulated in the long run. The relevance to reinforcement learning lies in the small number of observations the method needs to reach results comparable to those of related methods in the literature, which typically require millions of video frames to train agents on Atari 2600 games. We report results of training trials of COMPER on eight challenging Arcade Learning Environment (ALE) games using only 100,000 frames and about 25,000 iterations. We also present results for a DQN agent trained under the same experimental protocol on the same games as a baseline. To verify that a good policy can be approximated from a smaller number of observations, we also compare these results with those obtained from the millions of frames reported in the ALE benchmark.
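The abstract above describes COMPER only at a high level, so its similar-transition recurrence and dual transition memories are not reconstructed here. For orientation only, the sketch below (Python, hypothetical names) shows the standard DQN-style building blocks that such replay-based methods start from: a replay memory of transitions and one-step temporal-difference targets.

```python
import random
from collections import deque

import numpy as np

class ReplayMemory:
    """Fixed-size buffer of (state, action, reward, next_state, done) transitions."""
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s_next, done = map(np.array, zip(*batch))
        return s, a, r, s_next, done

def td_targets(q_next, rewards, dones, gamma=0.99):
    """One-step TD target r + gamma * max_a' Q(s', a'), zeroed at terminal states.
    q_next holds the target network's Q-values for the next states."""
    return rewards + gamma * (1.0 - dones) * q_next.max(axis=1)
```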
We describe a Physics-Informed Neural Network (PINN) that simulates the flow induced by the astronomical tide in a synthetic port channel, with dimensions based on the Santos - S\~ao Vicente - Bertioga Estuarine System. PINN models aim to combine the knowledge of physical systems and data-driven machine learning models. This is done by training a neural network to minimize the residuals of the governing equations in sample points. In this work, our flow is governed by the Navier-Stokes equations with some approximations. There are two main novelties in this paper. First, we design our model to assume that the flow is periodic in time, which is not feasible in conventional simulation methods. Second, we evaluate the benefit of resampling the function evaluation points during training, which has a near zero computational cost and has been verified to improve the final model, especially for small batch sizes. Finally, we discuss some limitations of the approximations used in the Navier-Stokes equations regarding the modeling of turbulence and how it interacts with PINNs.
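As a concrete illustration of the two ideas highlighted in the abstract, the following minimal PINN sketch (PyTorch, hypothetical setup) enforces periodicity in time through sine/cosine time features and redraws the collocation points at every optimization step. A 1-D advection residual stands in for the paper's approximated Navier-Stokes equations, which are not reproduced here.

```python
import math
import torch

c, period = 1.0, 1.0   # placeholder advection speed and assumed tidal period

net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def u(x, t):
    # Feeding sin/cos of t makes the network periodic in time by construction.
    w = 2.0 * math.pi / period
    return net(torch.cat([x, torch.sin(w * t), torch.cos(w * t)], dim=1))

for step in range(1000):
    # Fresh collocation points every iteration ("resampling", near-zero extra cost).
    x = torch.rand(256, 1, requires_grad=True)
    t = torch.rand(256, 1, requires_grad=True)
    out = u(x, t)
    u_x = torch.autograd.grad(out.sum(), x, create_graph=True)[0]
    u_t = torch.autograd.grad(out.sum(), t, create_graph=True)[0]
    residual = u_t + c * u_x                 # stand-in PDE residual u_t + c * u_x = 0
    loss = (residual ** 2).mean()            # boundary/data terms omitted for brevity
    opt.zero_grad()
    loss.backward()
    opt.step()
```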
Researchers typically resort to numerical methods to understand and predict ocean dynamics, a key task for grasping environmental phenomena. Such methods may become unsuitable when the topography is complex, knowledge of the underlying processes is incomplete, or the application is time-critical. On the other hand, if ocean dynamics are observed, recent machine learning methods can exploit those observations. In this paper we describe a data-driven method to predict environmental variables, such as current velocity and sea surface height, in the Santos-Sao Vicente-Bertioga estuarine system on the southeastern coast of Brazil. Our model exploits both temporal and spatial inductive biases by joining state-of-the-art sequence models (LSTMs and Transformers) with relational models (graph neural networks), learning temporal features together with spatial features, i.e., the relations shared among observation sites. We compare our results with the Santos Operational Forecasting System (SOFS). Experiments show that our model achieves better results while remaining flexible and requiring little domain knowledge.
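The following toy sketch (PyTorch, all names hypothetical) illustrates the general pattern described above, an LSTM summarizing each station's history followed by one round of adjacency-weighted message passing across stations; it is not the paper's architecture and uses a dense adjacency matrix in place of a graph library.

```python
import torch
import torch.nn as nn

class SpatioTemporalForecaster(nn.Module):
    """Toy sketch: an LSTM encodes each station's history, then one round of
    adjacency-weighted mixing shares information across stations."""
    def __init__(self, n_features, adjacency, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.mix = nn.Linear(hidden, hidden)        # GNN-style neighbour mixing
        self.head = nn.Linear(hidden, n_features)   # next-step prediction
        self.register_buffer("adj", adjacency)      # row-normalized (S, S) matrix

    def forward(self, x):
        # x: (batch, stations, time, features)
        b, s, t, f = x.shape
        _, (h, _) = self.lstm(x.reshape(b * s, t, f))
        h = h[-1].reshape(b, s, -1)                  # temporal summary per station
        h = torch.relu(self.mix(self.adj @ h))       # spatial message passing
        return self.head(h)                          # (batch, stations, features)

# Example: 4 stations, fully connected graph, 24-step history of 2 variables.
adj = torch.full((4, 4), 0.25)
model = SpatioTemporalForecaster(n_features=2, adjacency=adj)
pred = model(torch.randn(8, 4, 24, 2))   # -> (8, 4, 2)
```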
Several recent works have employed decision trees to build explainable partitions that aim to minimize the $k$-means cost function. These works, however, largely ignore metrics related to the depths of the leaves in the resulting tree, which may be surprising given how much the interpretability of a decision tree depends on these depths. To fill this gap in the literature, we propose an efficient algorithm that takes these metrics into account. In experiments on 7 datasets, our algorithm produces better results than decision-tree clustering algorithms such as \cite{dasgupta2020explainable}, \cite{frost2020exkmc}, \cite{laber2021price} and \cite{DBLP:conf/icml/MakarychevS21}, typically achieving lower or equivalent cost with considerably shallower trees. We also show, through a simple adaptation of existing techniques, that the problem of building explainable partitions induced by binary trees for the $k$-means cost function does not admit a $(1+\epsilon)$-approximation in polynomial time unless $P=NP$, which justifies the search for approximation algorithms and/or heuristics.
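The proposed algorithm itself is not spelled out in the abstract; the sketch below (Python with scikit-learn, assumed setup) only shows how the two quantities under discussion can be measured: the $k$-means cost induced by an axis-aligned decision tree with $k$ leaves and the depths of those leaves.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.tree import DecisionTreeClassifier

# Hypothetical evaluation sketch, not the paper's algorithm.
X, _ = make_blobs(n_samples=2000, centers=5, random_state=0)
k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# Induce an explainable partition: a decision tree with exactly k leaves
# trained to reproduce the k-means cluster labels.
tree = DecisionTreeClassifier(max_leaf_nodes=k, random_state=0).fit(X, km.labels_)

leaf = tree.apply(X)                       # leaf id of each point
cost = sum(((X[leaf == l] - X[leaf == l].mean(axis=0)) ** 2).sum()
           for l in np.unique(leaf))       # k-means cost of the induced partition
depth = tree.decision_path(X).sum(axis=1).A.ravel() - 1   # per-point leaf depth
print(f"tree-induced cost: {cost:.1f}, mean leaf depth: {depth.mean():.2f}")
```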
Despite the increasing availability of high-capacity computing platforms, implementation complexity remains a significant concern for the real-world deployment of neural networks. This concern stems not only from the huge cost of state-of-the-art network architectures, but also from the recent push towards edge intelligence and the use of neural networks in embedded applications. In this context, network compression techniques have been gaining interest due to their ability to reduce deployment costs while keeping inference accuracy at satisfactory levels. This paper is dedicated to developing a novel compression scheme for neural networks. To this end, a new $\ell_0$-norm-based regularization approach is first developed, which is capable of inducing strong sparseness in the network during training. Then, by targeting the smaller weights of the trained network with a pruning technique, smaller yet highly effective networks can be obtained. The proposed compression scheme also involves $\ell_2$-norm regularization to avoid overfitting, as well as fine-tuning to improve the performance of the pruned network. Experimental results are presented to demonstrate the effectiveness of the proposed scheme and to compare it with competing approaches.
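A hedged sketch of the three-stage pipeline outlined above follows (PyTorch). The paper's $\ell_0$-norm regularizer is not reproduced; an $\ell_1$ penalty stands in as a generic sparsity-inducing surrogate, followed by magnitude pruning and fine-tuning with weight decay as the $\ell_2$ term.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))

def sparsity_penalty(m, lam=1e-4):
    # Generic sparsity surrogate (L1); stands in for the paper's l0-based term.
    return lam * sum(p.abs().sum() for p in m.parameters())

def prune_by_magnitude(m, fraction=0.8):
    """Zero out the smallest-magnitude weights and return the pruning masks."""
    masks = {}
    for name, p in m.named_parameters():
        if p.dim() > 1:                               # prune weight matrices only
            k = int(fraction * p.numel())
            threshold = p.abs().flatten().kthvalue(k).values
            masks[name] = (p.abs() > threshold).float()
            p.data.mul_(masks[name])
    return masks

# Stage 1: train with loss = task_loss + sparsity_penalty(model)
# Stage 2: masks = prune_by_magnitude(model, fraction=0.8)
# Stage 3: fine-tune with torch.optim.SGD(model.parameters(), lr=1e-3,
#          weight_decay=1e-4), re-applying the masks after each optimizer step.
```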
A Digital Twin (DT) is a simulation of a physical system that provides information to make decisions that add economic, social or commercial value. The behaviour of a physical system changes over time, so a DT must be continually updated with data from the physical system to reflect its changing behaviour. For resource-constrained systems, updating a DT is non-trivial because of challenges such as on-board learning and the off-board data transfer. This paper presents a framework for updating data-driven DTs of resource-constrained systems geared towards system health monitoring. The proposed solution consists of: (1) an on-board system running a light-weight DT allowing the prioritisation and parsimonious transfer of data generated by the physical system; and (2) off-board robust updating of the DT and detection of anomalous behaviours. Two case studies are considered using a production gas turbine engine system to demonstrate the digital representation accuracy for real-world, time-varying physical systems.
Training agents via off-policy deep reinforcement learning (RL) requires a large memory, named replay memory, that stores past experiences used for learning. These experiences are sampled, uniformly or non-uniformly, to create the batches used for training. When calculating the loss function, off-policy algorithms assume that all samples are of the same importance. In this paper, we hypothesize that training can be enhanced by assigning a different importance to each experience, based on its temporal-difference (TD) error, directly in the training objective. We propose a novel method that introduces a weighting factor for each experience when calculating the loss function at the learning stage. In addition to improving convergence speed when used with uniform sampling, the method can be combined with prioritization methods for non-uniform sampling. Combining the proposed method with prioritization methods improves sampling efficiency while increasing the performance of TD-based off-policy RL algorithms. The effectiveness of the proposed method is demonstrated by experiments in six environments of the OpenAI Gym suite. The experimental results show that the proposed method yields a 33%~76% reduction in convergence time in three environments, and an 11% increase in returns together with a 3%~10% increase in success rate in the other three environments.
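A minimal sketch of the idea (PyTorch, hypothetical weighting) is given below: each sample's squared TD error is scaled by a weight derived from the magnitude of that same TD error; the exact weighting scheme of the paper is not reproduced.

```python
import torch

def weighted_td_loss(q_pred, q_target, eps=1e-2):
    """Weight each sample's squared TD error by the (detached, normalized)
    magnitude of its own TD error, instead of treating all samples equally."""
    td_error = q_target - q_pred                  # per-sample TD errors
    w = (td_error.abs() + eps).detach()
    w = w / w.sum()                               # normalize weights over the batch
    return (w * td_error.pow(2)).sum()

# Usage inside a DQN-style update (shapes: actions is (batch, 1)):
# q_pred   = q_net(states).gather(1, actions).squeeze(1)
# q_target = rewards + gamma * (1 - dones) * target_net(next_states).max(1).values
# loss = weighted_td_loss(q_pred, q_target.detach())
```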
The Elo algorithm, due to its simplicity, is widely used for rating in sports competitions as well as in other applications where the rating/ranking is a useful tool for predicting future results. However, despite its widespread use, a detailed understanding of the convergence properties of the Elo algorithm is still lacking. Aiming to fill this gap, this paper presents a comprehensive (stochastic) analysis of the Elo algorithm, considering round-robin (one-on-one) competitions. Specifically, analytical expressions are derived characterizing the behavior/evolution of the skills and of important performance metrics. Then, taking into account the relationship between the behavior of the algorithm and the step-size value, which is a hyperparameter that can be controlled, some design guidelines as well as discussions about the performance of the algorithm are provided. To illustrate the applicability of the theoretical findings, experimental results are shown, corroborating the very good match between analytical predictions and those obtained from the algorithm using real-world data (from the Italian SuperLega, Volleyball League).
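For reference, a minimal implementation of the standard Elo update analysed above follows; the K-factor plays the role of the step size discussed in the abstract.

```python
def elo_expected(r_a, r_b, scale=400.0):
    """Probability that player A beats player B under the Elo/logistic model."""
    return 1.0 / (1.0 + 10.0 ** (-(r_a - r_b) / scale))

def elo_update(r_a, r_b, score_a, k=20.0):
    """score_a is 1.0 for a win, 0.5 for a draw, 0.0 for a loss; k is the step size."""
    p = elo_expected(r_a, r_b)
    return r_a + k * (score_a - p), r_b - k * (score_a - p)

# Example: two equally rated players, A wins.
ra, rb = elo_update(1500.0, 1500.0, 1.0)   # -> (1510.0, 1490.0)
```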
Most TextVQA approaches focus on integrating objects, scene texts and question words with a simple transformer encoder. But this fails to capture the semantic relations between different modalities. The paper proposes a Scene Graph based co-Attention Network (SceneGATE) for TextVQA, which reveals the semantic relations among the objects, Optical Character Recognition (OCR) tokens and the question words. It is achieved by a TextVQA-based scene graph that discovers the underlying semantics of an image. We create a guided-attention module to capture the intra-modal interplay between the language and the vision as guidance for inter-modal interactions. To explicitly teach the relations between the two modalities, we propose and integrate two attention modules, namely a scene graph-based semantic relation-aware attention and a positional relation-aware attention. We conduct extensive experiments on two benchmark datasets, Text-VQA and ST-VQA, and show that our SceneGATE method outperforms existing ones because of the scene graph and its attention modules.
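A loose sketch of a guided-attention block in the spirit described above (PyTorch, not the SceneGATE implementation): question-word features act as queries over the visual features (objects and OCR tokens), so the language side guides which visual tokens are attended to.

```python
import torch
import torch.nn as nn

class GuidedAttention(nn.Module):
    """Cross-attention where language queries attend over visual keys/values."""
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, question_feats, visual_feats):
        # question_feats: (batch, n_words, dim); visual_feats: (batch, n_tokens, dim)
        attended, weights = self.cross(question_feats, visual_feats, visual_feats)
        return self.norm(question_feats + attended), weights

block = GuidedAttention()
q = torch.randn(2, 12, 512)    # question-word embeddings
v = torch.randn(2, 50, 512)    # object + OCR token embeddings
out, attn = block(q, v)        # out: (2, 12, 512)
```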
Existing analyses of neural network training often operate under the unrealistic assumption of an extremely small learning rate. This lies in stark contrast to practical wisdom and empirical studies, such as the work of J. Cohen et al. (ICLR 2021), which exhibit startling new phenomena (the "edge of stability" or "unstable convergence") and potential benefits for generalization in the large learning rate regime. Despite a flurry of recent works on this topic, however, the latter effect is still poorly understood. In this paper, we take a step towards understanding genuinely non-convex training dynamics with large learning rates by performing a detailed analysis of gradient descent for simplified models of two-layer neural networks. For these models, we provably establish the edge of stability phenomenon and discover a sharp phase transition for the step size below which the neural network fails to learn "threshold-like" neurons (i.e., neurons with a non-zero first-layer bias). This elucidates one possible mechanism by which the edge of stability can in fact lead to better generalization, as threshold neurons are basic building blocks with useful inductive bias for many tasks.
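For context, the classical step-size threshold that the edge-of-stability literature refers to comes from the local quadratic analysis of gradient descent (this is the textbook bound, not the paper's two-layer result): the update $\theta_{t+1} = \theta_t - \eta \nabla L(\theta_t)$ is stable around a minimizer only if the sharpness $\lambda_{\max}(\nabla^2 L(\theta_t)) \le 2/\eta$, and "edge of stability" describes runs in which the sharpness rises to and then hovers around $2/\eta$ rather than the iterates diverging.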